
    Optimization of mesh hierarchies in Multilevel Monte Carlo samplers

    We perform a general optimization of the parameters in the Multilevel Monte Carlo (MLMC) discretization hierarchy based on uniform discretization methods with general approximation orders and computational costs. We optimize hierarchies with geometric and non-geometric sequences of mesh sizes and show that geometric hierarchies, when optimized, are nearly optimal and have the same asymptotic computational complexity as non-geometric optimal hierarchies. We discuss how enforcing constraints on parameters of MLMC hierarchies affects the optimality of these hierarchies. These constraints include an upper and a lower bound on the mesh size or enforcing that the number of samples and the number of discretization elements are integers. We also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. To provide numerical grounds for our theoretical results, we apply these optimized hierarchies together with the Continuation MLMC Algorithm. The first example considers a three-dimensional elliptic partial differential equation with random inputs. Its space discretization is based on continuous piecewise trilinear finite elements and the corresponding linear system is solved by either a direct or an iterative solver. The second example considers a one-dimensional Itô stochastic differential equation discretized by a Milstein scheme.
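
    To make the setting concrete, below is a minimal sketch of the classical sample-allocation step that this kind of hierarchy optimization refines. The power-law rates and constants (beta, gamma, c_v, c_c) and the hierarchy parameters are illustrative placeholders, not the paper's calibrated values; rounding the sample counts up to integers mirrors one of the constraints discussed above.

    import math

    def geometric_hierarchy(h0, ratio, L):
        """Mesh sizes h_l = h0 * ratio**(-l) for levels l = 0, ..., L."""
        return [h0 * ratio ** (-l) for l in range(L + 1)]

    def optimal_samples(h, tol_stat, beta=2.0, gamma=3.0, c_v=1.0, c_c=1.0):
        """Classical Lagrange-multiplier allocation: minimize total cost
        subject to sum_l V_l / N_l <= tol_stat**2, under the power-law
        models V_l = c_v * h_l**beta (level variance) and
        C_l = c_c * h_l**(-gamma) (cost per sample)."""
        V = [c_v * hl ** beta for hl in h]
        C = [c_c * hl ** (-gamma) for hl in h]
        S = sum(math.sqrt(v * c) for v, c in zip(V, C))
        # enforce integer sample counts by rounding up
        return [max(1, math.ceil(math.sqrt(v / c) * S / tol_stat ** 2))
                for v, c in zip(V, C)]

    h = geometric_hierarchy(h0=0.5, ratio=2.0, L=5)
    print(optimal_samples(h, tol_stat=1e-2))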

    A Continuation Multilevel Monte Carlo algorithm

    We propose a novel Continuation Multilevel Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending when the required error tolerance is satisfied. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding weak and strong errors. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only a few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a non-trivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator, justifying in this way our error estimate, which allows prescribing both the required accuracy and the confidence in the final result. Numerical results substantiate the analysis and illustrate the corresponding computational savings in examples that are described in terms of differential equations either driven by random measures or with random coefficients.
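
    A schematic of the continuation loop only, under assumptions of ours: the helpers run_mlmc and calibrate are hypothetical stand-ins for the paper's MLMC solver and its Bayesian parameter estimation (here trivial stubs so the control flow runs), and the tolerance reduction factor r is illustrative.

    def run_mlmc(tol, params):
        """Stub: return an estimate and the per-level samples it used."""
        return 0.0, []

    def calibrate(samples):
        """Stub: re-fit cost and weak/strong error models from all samples."""
        return {}

    def cmlmc(tol_final, tol_start, r=2.0):
        tol, params = tol_start, None
        while True:
            estimate, samples = run_mlmc(tol, params)
            params = calibrate(samples)      # reuse every sample drawn so far
            if tol <= tol_final:             # required tolerance met: done
                return estimate
            tol = max(tol / r, tol_final)    # tighten the tolerance and continue

    print(cmlmc(tol_final=1e-3, tol_start=1e-1))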

    Implementation and analysis of an adaptive multilevel Monte Carlo algorithm

    We present an adaptive multilevel Monte Carlo (MLMC) method for weak approximations of solutions to Itô stochastic differential equations (SDE). The work [Oper. Res. 56 (2008), 607-617] proposed and analyzed an MLMC method based on a hierarchy of uniform time discretizations and control variates to reduce the computational effort required by a single level Euler-Maruyama Monte Carlo method from $\mathcal{O}(\mathrm{TOL}^{-3})$ to $\mathcal{O}(\mathrm{TOL}^{-2}\log(\mathrm{TOL}^{-1})^{2})$ for a mean square error of $\mathcal{O}(\mathrm{TOL}^{2})$. Later, the work [Lect. Notes Comput. Sci. Eng. 82, Springer-Verlag, Berlin (2012), 217-234] presented an MLMC method using a hierarchy of adaptively refined, non-uniform time discretizations, and, as such, it may be considered a generalization of the uniform time discretization MLMC method. This work improves the adaptive MLMC algorithms presented in [Lect. Notes Comput. Sci. Eng. 82, Springer-Verlag, Berlin (2012), 217-234] and also provides a mathematical analysis of the improved algorithms. In particular, we show that under some assumptions our adaptive MLMC algorithms are asymptotically accurate and essentially have the correct complexity, with improved control of the complexity constant factor in the asymptotic analysis. Numerical tests include one case with singular drift and one with stopped diffusion, where the complexity of a uniform single level method is $\mathcal{O}(\mathrm{TOL}^{-4})$. For both these cases the results confirm the theory, exhibiting savings in the computational cost for achieving the accuracy $\mathcal{O}(\mathrm{TOL})$ from $\mathcal{O}(\mathrm{TOL}^{-3})$ for the adaptive single level algorithm to essentially $\mathcal{O}(\mathrm{TOL}^{-2}\log(\mathrm{TOL}^{-1})^{2})$ for the adaptive MLMC algorithm.
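
    For orientation, below is a minimal sketch of the uniform-time-step Euler-Maruyama level coupling that the adaptive method generalizes; the adaptive, non-uniform refinement that is this paper's actual contribution is not shown. The SDE is dX = a(X)dt + b(X)dW on [0, T]; the fine path takes twice the coarse steps and both are driven by the same Brownian increments. The geometric Brownian motion example at the end is illustrative.

    import math
    import random

    def level_difference(a, b, x0, T, n_coarse, payoff):
        """One sample of payoff(fine path) - payoff(coarse path)."""
        dt_f = T / (2 * n_coarse)
        xf = xc = x0
        for _ in range(n_coarse):
            dw1 = random.gauss(0.0, math.sqrt(dt_f))
            dw2 = random.gauss(0.0, math.sqrt(dt_f))
            # two fine Euler-Maruyama steps
            xf += a(xf) * dt_f + b(xf) * dw1
            xf += a(xf) * dt_f + b(xf) * dw2
            # one coarse step, driven by the summed increment
            xc += a(xc) * 2 * dt_f + b(xc) * (dw1 + dw2)
        return payoff(xf) - payoff(xc)

    # Example: geometric Brownian motion, level correction for E[X_T].
    d = [level_difference(lambda x: 0.05 * x, lambda x: 0.2 * x,
                          x0=1.0, T=1.0, n_coarse=8, payoff=lambda x: x)
         for _ in range(1000)]
    print(sum(d) / len(d))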

    MATHICSE Technical Report : Analysis of the discrete $L^2$ projection on polynomial spaces with random evaluations

    We analyse the problem of approximating a multivariate function by discrete least-squares projection on a polynomial space starting from random, noise-free observations. An area of possible application of such a technique is Uncertainty Quantification (UQ) for computational models. We prove an optimal convergence estimate, up to a logarithmic factor, in the monovariate case, when the observation points are sampled in a bounded domain from a probability density function bounded away from zero, provided the number of samples scales quadratically with the dimension of the polynomial space. Several numerical tests are presented, in both the monovariate and multivariate cases, confirming our theoretical estimates. The numerical tests also clarify how the convergence rate depends on the number of sampling points, on the polynomial degree, and on the smoothness of the target function.
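
    A minimal sketch of the projection just described, in the monovariate setting: approximate f on [-1, 1] in the span of Legendre polynomials of degree at most w from n random, noise-free evaluations, with n scaling quadratically in the dimension of the polynomial space (the regime covered by the estimate). The target function and the constant 4 in the sample scaling are illustrative choices of ours.

    import numpy as np

    rng = np.random.default_rng(0)
    f = lambda x: np.exp(x)          # target function (illustrative)
    w = 6                            # maximal polynomial degree
    dim = w + 1                      # dimension of the polynomial space
    n = 4 * dim ** 2                 # quadratic sample scaling (our constant)

    x = rng.uniform(-1.0, 1.0, n)    # density bounded away from zero on [-1, 1]
    V = np.polynomial.legendre.legvander(x, w)       # n x (w+1) design matrix
    coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)  # least-squares projection

    xt = np.linspace(-1.0, 1.0, 200)
    err = np.max(np.abs(np.polynomial.legendre.legval(xt, coef) - f(xt)))
    print(f"max error on test grid: {err:.2e}")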

    MATHICSE Technical Report : Optimization of mesh hierarchies in multilevel Monte Carlo samplers

    We perform a general optimization of the parameters in the Multilevel Monte Carlo (MLMC) discretization hierarchy based on uniform discretization methods with general approximation orders and computational costs. Moreover, we discuss extensions to non-uniform discretizations based on a priori refinements and the effect of imposing constraints on the largest and/or smallest mesh sizes. We optimize geometric and non-geometric hierarchies and compare them to each other, concluding that the geometric hierarchies, when optimized, are nearly optimal and have the same asymptotic computational complexity. We discuss how enforcing domain constraints on parameters of MLMC hierarchies affects the optimality of these hierarchies. These domain constraints include an upper and lower bound on the mesh size or enforcing that the number of samples and the number of discretization elements are integers. We also discuss the optimal tolerance splitting between the bias and the statistical error contributions and its asymptotic behavior. To provide numerical grounds for our theoretical results, we apply these optimized hierarchies, together with the Continuation MLMC Algorithm [13] that we recently developed, to several examples. These include the approximation of three-dimensional elliptic partial differential equations with random inputs based on FEM with either direct or iterative solvers, and Itô stochastic differential equations based on the Milstein scheme.
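
    For reference, the tolerance splitting discussed here is, in standard notation (ours, not necessarily the report's), the choice of a parameter $\theta \in (0,1)$ in the pair of constraints

    \[
      \underbrace{\bigl|\mathbb{E}[g - g_L]\bigr|}_{\text{bias}} \le \theta\,\mathrm{TOL},
      \qquad
      \underbrace{C_\alpha \sqrt{\operatorname{Var}\bigl[\widehat{g}\,\bigr]}}_{\text{statistical error}} \le (1-\theta)\,\mathrm{TOL},
    \]

    and the optimization above selects $\theta$ as a function of $\mathrm{TOL}$ rather than fixing, say, $\theta = 1/2$ a priori.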

    MATHICSE Technical Report : A continuation multilevel Monte Carlo algorithm

    We propose a novel Continuation Multilevel Monte Carlo (CMLMC) algorithm for weak approximation of stochastic models that are described in terms of differential equations either driven by random measures or with random coefficients. The CMLMC algorithm solves the given approximation problem for a sequence of decreasing tolerances, ending with the desired one. CMLMC assumes discretization hierarchies that are defined a priori for each level and are geometrically refined across levels. The actual choice of computational work across levels is based on parametric models for the average cost per sample and the corresponding weak and strong errors. These parameters are calibrated using Bayesian estimation, taking particular notice of the deepest levels of the discretization hierarchy, where only a few realizations are available to produce the estimates. The resulting CMLMC estimator exhibits a nontrivial splitting between bias and statistical contributions. We also show the asymptotic normality of the statistical error in the MLMC estimator, justifying in this way our error estimate, which allows prescribing both the required accuracy and the confidence in the final result. Numerical examples substantiate the above results and illustrate the corresponding computational savings.
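
    The normality result is what turns a variance estimate into a confidence statement. In standard notation (ours, not necessarily the paper's), prescribing confidence $1 - \alpha$ amounts to enforcing

    \[
      C_\alpha \sqrt{\operatorname{Var}\bigl[\widehat{g}\,\bigr]} \le \mathrm{TOL}_{\mathrm{stat}},
      \qquad C_\alpha = \Phi^{-1}\!\bigl(1 - \tfrac{\alpha}{2}\bigr),
    \]

    where $\Phi$ is the standard normal CDF, so that asymptotically $\mathbb{P}\bigl(|\mathbb{E}[g] - \widehat{g}| \le \mathrm{TOL}\bigr) \ge 1 - \alpha$.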